
    On Minimizing Data-read and Download for Storage-Node Recovery

    We consider the problem of efficiently recovering the data stored in any individual node of a distributed storage system from the rest of the nodes. Applications include handling failures and degraded reads. We measure efficiency in terms of the amount of data read and the amount of data downloaded. To minimize the download, we focus on the minimum-bandwidth setting of the 'regenerating codes' model for distributed storage. Under this model, the system has a total of n nodes, and the data stored in any node must be (efficiently) recoverable from any d of the other (n-1) nodes. Lower bounds on the two metrics under this model were derived previously; it has also been shown that these bounds are achievable for both the amount of data read and the download when d=n-1, and for the amount of download alone when d<n-1. In this paper, we complete the picture by proving the converse result: when d<n-1, these lower bounds are strictly loose with respect to the amount of data read, i.e., they cannot be met. The proof is information-theoretic and hence applies to non-linear codes as well. We also show that under two (practical) relaxations of the problem setting, these lower bounds can be met for both read and download simultaneously. Comment: IEEE Communications Letters.
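
    The download lower bound referenced here is the cut-set bound of the regenerating-codes framework at the minimum-bandwidth (MBR) point. Below is a minimal sketch of that calculation, assuming the usual parameters (file size B, reconstruction from any k nodes, repair from any d helpers); the function name and example numbers are illustrative, not taken from the paper.

        from fractions import Fraction

        def mbr_point(B, k, d):
            """Cut-set formulas at the minimum-bandwidth-regenerating (MBR) point:
            per-node storage alpha and per-helper download beta for a file of size B,
            with reconstruction from any k nodes and repair from any d helpers."""
            beta = Fraction(2 * B, k * (2 * d - k + 1))  # download per helper node
            alpha = d * beta                             # per-node storage (= total download at MBR)
            return alpha, beta

        # Example: B = 12 units, k = 3, d = 5 helpers per repair.
        alpha, beta = mbr_point(12, 3, 5)
        print(f"storage per node = {alpha}, download per helper = {beta}, total download = {5 * beta}")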

    The Role of Author Identities in Peer Review

    There is widespread debate on whether to anonymize author identities in peer review. The key argument for anonymization is to mitigate bias, whereas arguments against it posit various uses of author identities in the review process. The Innovations in Theoretical Computer Science (ITCS) 2023 conference adopted a middle ground: author identities were initially anonymized, revealed to each reviewer after they had submitted their initial review, and reviewers were then allowed to revise their reviews. We present an analysis of the reviews pertaining to the identification and use of author identities. Our key findings are: (I) A majority of reviewers self-report not knowing, and being unable to guess, the identities of the authors of the papers they were reviewing. (II) After the initial submission of reviews, the overall merit score was changed in 7.1% of reviews and the self-reported reviewer expertise in 3.8%. (III) The correlation between the rank of authors' affiliations and the change in overall merit is very weak and statistically insignificant; the correlation with the change in reviewer expertise is weak but statistically significant. We also conducted an anonymous survey to obtain opinions from reviewers and authors. The main findings from the 200 survey responses are: (i) A vast majority of participants favor anonymizing author identities in some form. (ii) The "middle-ground" initiative of ITCS 2023 was appreciated. (iii) Detecting conflicts of interest is a challenge that needs to be addressed if author identities are anonymized. Overall, these findings support anonymizing author identities in some form (e.g., as was done in ITCS 2023), as long as there is a robust and efficient way to check conflicts of interest.
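
    Finding (III) is a rank-correlation test between affiliation rank and post-reveal changes. Below is a minimal sketch of such a test on synthetic data, using Kendall's tau; the paper's exact statistic, dataset, and affiliation ranking are not reproduced here.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(0)

        # Hypothetical data: for each review, the rank of the authors' affiliation
        # (1 = highest-ranked) and the change in overall merit after identities were revealed.
        affiliation_rank = rng.integers(1, 200, size=500)
        merit_change = rng.choice([-1, 0, 0, 0, 1], size=500)

        # Rank correlation with a significance test; Kendall's tau copes with the
        # many ties that arise from discrete score changes.
        tau, p_value = stats.kendalltau(affiliation_rank, merit_change)
        print(f"Kendall tau = {tau:.3f}, p-value = {p_value:.3f}")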

    When Do Redundant Requests Reduce Latency?

    Several systems possess the flexibility to serve requests in more than one way. For instance, a distributed storage system storing multiple replicas of the data can serve a request from any of the servers that store the requested data, or a computational task may be performed in a compute cluster by any one of multiple processors. In such systems, the latency of serving requests may potentially be reduced by sending "redundant requests": a request may be sent to more servers than needed, and it is deemed served when the requisite number of servers complete service. Such a mechanism trades off the possibility of faster execution of at least one copy of the request against the increase in delay due to the additional load on the system. Because of this tradeoff, it is unclear when redundant requests actually help. Several recent works empirically evaluate the latency performance of redundant requests in diverse settings. This work presents an analytical study of the latency performance of redundant requests, with the primary goals of characterizing the scenarios under which sending redundant requests helps (and those under which it does not), and of designing optimal redundant-requesting policies. We first present a model that captures the key features of such systems. We show that when service times are i.i.d. memoryless or "heavier", and when the additional copies of already-completed jobs can be removed instantly, redundant requests reduce the average latency. On the other hand, when service times are "lighter", or when service times are memoryless and removal of jobs is not instantaneous, having no redundancy in the requests is optimal under high loads. Our results hold for arbitrary arrival processes. Comment: Extended version of a paper presented at the Allerton Conference 201
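
    As a quick illustration of why memoryless service with instantaneous removal favors redundancy, the sketch below compares two textbook dispatch policies using standard M/M/1 formulas (random dispatch to one server vs. replicating every request to all servers); the policies and parameters are illustrative and far simpler than the paper's general model.

        def mean_latency_no_redundancy(lam, mu, k):
            """Random dispatch to one of k servers: each queue is M/M/1 with
            arrival rate lam/k and service rate mu."""
            assert lam / k < mu, "each queue must be stable"
            return 1.0 / (mu - lam / k)

        def mean_latency_full_redundancy(lam, mu, k):
            """Every request is sent to all k servers and redundant copies are removed
            instantly. With exponential (memoryless) service, all servers always work
            on the same head-of-line job, so the system behaves like an M/M/1 queue
            with service rate k*mu."""
            assert lam < k * mu, "system must be stable"
            return 1.0 / (k * mu - lam)

        lam, mu, k = 3.0, 1.0, 4  # total arrival rate, per-server service rate, number of servers
        print("no redundancy  :", mean_latency_no_redundancy(lam, mu, k))    # 4.0
        print("full redundancy:", mean_latency_full_redundancy(lam, mu, k))  # 1.0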

    The MDS Queue: Analysing the Latency Performance of Erasure Codes

    In order to scale economically, data centers are increasingly evolving their data storage methods from simple replication to more powerful erasure codes, which provide the same level of reliability as replication at a significantly lower storage cost. In particular, it is well known that Maximum-Distance-Separable (MDS) codes, such as Reed-Solomon codes, provide the maximum storage efficiency. While the use of codes for providing improved reliability in archival storage systems, where the data is accessed infrequently (so-called "cold data"), is well understood, the role of codes in the storage of more frequently accessed, active "hot data", where latency is the key metric, is less clear. In this paper, we study data storage systems based on MDS codes through the lens of queueing theory, and term this the "MDS queue." We analytically characterize the (average) latency performance of MDS queues, for which we present insightful scheduling policies that form upper and lower bounds on performance and are observed to be quite tight. Extensive simulations are also provided and used to validate our theoretical analysis. We also employ the framework of the MDS queue to analyse different methods of performing so-called degraded reads (reading of partial data) in distributed data storage.
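
    In the queueing-free limit, the latency of an MDS read is simply the k-th order statistic of n parallel service times. Below is a minimal sketch under i.i.d. exponential service times; it ignores queueing entirely, which is precisely the gap the MDS-queue analysis addresses, and the parameters are illustrative.

        def expected_kth_of_n_exponential(n, k, mu=1.0):
            """Expected time until k out of n parallel i.i.d. Exp(mu) reads finish,
            i.e. the k-th order statistic: (1/mu) * (H_n - H_{n-k})."""
            harmonic = lambda m: sum(1.0 / i for i in range(1, m + 1))
            return (harmonic(n) - harmonic(n - k)) / mu

        # (10, 7) MDS code: a read completes once any 7 of the 10 chunks arrive.
        print("MDS (10,7) read  :", expected_kth_of_n_exponential(10, 7))
        # Replication baseline: read any 1 of 3 replicas.
        print("replication (3,1):", expected_kth_of_n_exponential(3, 1))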

    Fundamental Limits on Communication for Oblivious Updates in Storage Networks

    In distributed storage systems, storage nodes intermittently go offline for numerous reasons. On coming back online, a node needs to update its contents to reflect any modifications to the data in the interim. In this paper, we consider a setting where no information regarding the modified data is logged in the system. In such a setting, a 'stale' node needs to update its contents by downloading data from already-updated nodes, while neither the stale node nor the updated nodes have any knowledge of which data symbols were modified or what their new values are. We investigate the fundamental limits on the amount of communication necessary for such an "oblivious" update process. We first present a generic lower bound on the amount of communication that is necessary under any storage code with a linear encoding (while allowing non-linear update protocols). This lower bound is derived under a set of extremely weak conditions, giving all updated nodes access to the entire modified data and the stale node access to the entire stale data as side information. We then present codes and update algorithms that are optimal in that they meet this lower bound. Next, we present a lower bound for an important subclass of codes, namely linear Maximum-Distance-Separable (MDS) codes, along with an MDS code construction and an associated update algorithm that meet this lower bound. These results thus establish the capacity of oblivious updates, in terms of communication requirements, under these settings. Comment: IEEE Global Communications Conference (GLOBECOM) 201
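
    As a toy illustration of the setting (not the paper's schemes or its lower bounds), the sketch below shows the naive baseline for an oblivious update under a tiny (3,2) parity code: the stale node simply downloads enough updated symbols to re-encode its own content from scratch, no matter how few symbols actually changed. The code, field, and values are hypothetical.

        P = 257  # a toy prime field, GF(257)

        # Toy (3,2) MDS code: node 0 stores m1, node 1 stores m2, node 2 stores m1 + m2 (mod P).
        def encode(msg):
            m1, m2 = msg
            return [m1 % P, m2 % P, (m1 + m2) % P]

        old_message = [10, 20]
        new_message = [10, 99]  # the second symbol was modified, but nothing was logged

        # Node 2 (the parity node) was offline during the update, so it is "stale".
        stale_parity = encode(old_message)[2]

        # Naive oblivious update: the stale node downloads one symbol from each of the
        # two updated systematic nodes and recomputes its parity from scratch.
        downloaded = encode(new_message)[:2]  # 2 symbols of communication
        updated_parity = sum(downloaded) % P

        assert updated_parity == encode(new_message)[2]
        print("symbols downloaded by the stale node:", len(downloaded))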